dynamic priority scheduling (Wikipedia English edition)
Definition:
Dynamic scheduling is a method in which the hardware determines which instructions to execute, as opposed to a statically scheduled machine, in which the compiler determines the order of execution. In essence, the processor executes instructions out of order.
A major driving force in the microprocessor industry is the never-ending desire to miniaturize. Smaller transistors require less voltage to operate, and thus consume less power and produce less heat. Smaller interconnect distances also allow faster clock speeds. Perhaps most important of all, smaller die areas lead to cheaper processors, since more chips fit on a single wafer. The first microprocessor made by Intel was the 4004, which had 2,300 transistors; today's chips, by contrast, incorporate 5 to 20 million. So what do designers do with all those transistors?
A major consumer of die area is, of course, the cache. Caches, like any IC (integrated circuit) memory device, need many wires running to and from their read and write ports. For an on-chip cache, the load/store unit in the CPU must be able to reach every location in the cache from both the read and write ports, and the situation is even worse when there is more than one load/store unit. That is a lot of wires. In the Pentium Pro, for example, a single package includes both the CPU chip and an L2 cache chip; the CPU chip has about 5 million transistors, while the cache chip has about 15 million.
Even accounting for caches, however, today's chips show a large increase in transistor count over the 4004: microprocessors are becoming increasingly complex. This is understandable, since chip designers want to create processors that are fast and at the same time affordable. As process technology improved and more transistors fit in the same die area, it became cost-effective to add new or improved features to the processor to increase its effective speed. One of these improvements is dynamic scheduling.
As its name implies, dynamic scheduling is a method in which the hardware determines which instructions to execute, as opposed to a statically scheduled machine, in which the compiler determines the order of execution; in essence, the processor executes instructions out of order. Dynamic scheduling is akin to a dataflow machine, in which instructions execute not in the order in which they appear but based on the availability of their source operands. Of course, a real processor also has to take into account its limited resources, so instructions execute based on the availability of both the source operands and the requested functional units.
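The operand-driven issue rule just described can be sketched as a toy simulation. This is illustrative only, not a model of any real pipeline: the instruction names, latencies, and functional-unit counts are all invented.

```python
# Toy sketch of dataflow-style dynamic scheduling: an instruction issues
# as soon as its source operands are ready AND a functional unit of the
# required kind is free. All parameters below are made up.
from collections import namedtuple

Instr = namedtuple("Instr", "name dest srcs unit latency")

program = [
    Instr("I0", "r1", [],           "LOAD", 3),  # r1 <- memory
    Instr("I1", "r2", [],           "LOAD", 3),  # r2 <- memory
    Instr("I2", "r3", ["r1", "r2"], "ALU",  1),  # r3 <- r1 + r2
    Instr("I3", "r4", [],           "ALU",  1),  # independent of the loads
]

def simulate(program, units={"LOAD": 1, "ALU": 1}):
    ready_regs = set()        # registers whose values are available
    pending = list(program)   # not yet issued
    in_flight = []            # (finish_cycle, instr) pairs
    order = []                # the issue order the hardware actually chose
    cycle = 0
    while pending or in_flight:
        # retire finished instructions, making their results available
        for finish, ins in [f for f in in_flight if f[0] == cycle]:
            ready_regs.add(ins.dest)
            in_flight.remove((finish, ins))
        # issue every instruction whose operands are ready and unit is free
        busy = [ins.unit for _, ins in in_flight]
        for ins in list(pending):
            if (all(s in ready_regs for s in ins.srcs)
                    and busy.count(ins.unit) < units[ins.unit]):
                pending.remove(ins)
                in_flight.append((cycle + ins.latency, ins))
                busy.append(ins.unit)
                order.append(ins.name)
        cycle += 1
    return order

print(simulate(program))  # I3 issues before I1 and I2: out of program order
```

With only one load unit, I1 must wait behind I0, but the independent ALU instruction I3 issues immediately; I2 waits until both of its source registers are produced.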
Dynamically scheduled machines can exploit parallelism that is not visible at compile time. They are also more versatile: since the hardware takes care of much of the scheduling, code does not necessarily have to be recompiled to run efficiently, whereas on a statically scheduled machine code would have to be recompiled to take advantage of the particular hardware. (All of this assumes the machines use the same instruction set architecture; code would have to be recompiled in any case if the machines used different ISAs.)
Dynamic priority scheduling is a type of scheduling algorithm in which priorities are calculated during the execution of the system. The goal of dynamic priority scheduling is to adapt to dynamically changing progress and to form an optimal configuration in a self-sustaining manner. Depending on the difficulty of a given problem, it can be very hard to produce well-defined policies that achieve this goal.
Earliest-deadline-first (EDF) scheduling and least-slack-time (LST) scheduling are examples of dynamic priority scheduling algorithms.
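As a concrete sketch of one of these algorithms, here is a minimal single-CPU EDF simulation at unit-time granularity: at every tick the priority is recomputed and the ready job with the earliest absolute deadline runs. The job parameters are invented for illustration.

```python
# Minimal earliest-deadline-first (EDF) sketch. Priorities are dynamic:
# recomputed every tick as "earliest absolute deadline among ready jobs".
# Job parameters below are illustrative, not from any real workload.

def edf_schedule(jobs, horizon):
    """jobs: list of (name, release_time, exec_time, deadline) tuples."""
    remaining = {name: c for name, r, c, d in jobs}
    timeline = []
    for t in range(horizon):
        ready = [(d, name) for name, r, c, d in jobs
                 if r <= t and remaining[name] > 0]
        if not ready:
            timeline.append(None)      # CPU idle this tick
            continue
        _, chosen = min(ready)         # earliest deadline wins
        remaining[chosen] -= 1
        timeline.append(chosen)
    return timeline

jobs = [("A", 0, 2, 10), ("B", 1, 1, 3), ("C", 2, 2, 6)]
print(edf_schedule(jobs, 6))  # ['A', 'B', 'C', 'C', 'A', None]
```

Note how B, released at time 1 with the tightest deadline, preempts A, which only finishes after both urgent jobs are done: the priority ordering changes as jobs arrive, which is exactly what makes the scheme dynamic.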
==Measuring the effectiveness of scheduling==
Dynamic priority scheduling focuses on algorithms that assign priorities based on temporal parameters and that maximize resource utilization. The utilization measure of a scheduling algorithm, called its schedulable utilization, is scaled from 0 to 1; the higher the schedulable utilization, the better the algorithm. Every set of periodic tasks with total utilization less than or equal to the schedulable utilization of an algorithm can be feasibly scheduled by that algorithm.
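The utilization test implied above can be sketched as follows: a periodic task set with total utilization U = Σ Cᵢ/Tᵢ is feasible under an algorithm whose schedulable utilization is at least U. For preemptive EDF on a single processor the well-known bound is 1.0; the task parameters here are made up for illustration.

```python
# Sketch of the schedulable-utilization test. Each periodic task is
# (exec_time C, period T); total utilization is sum(C_i / T_i). The
# bound 1.0 is EDF's schedulable utilization on one processor; the task
# set itself is invented for illustration.

def total_utilization(tasks):
    """tasks: list of (exec_time, period) pairs."""
    return sum(c / t for c, t in tasks)

def schedulable(tasks, bound=1.0):
    """Feasible if total utilization does not exceed the algorithm's bound."""
    return total_utilization(tasks) <= bound

tasks = [(1, 4), (2, 8), (1, 5)]   # U = 0.25 + 0.25 + 0.20 = 0.70
print(total_utilization(tasks))
print(schedulable(tasks))          # within EDF's bound of 1.0
```

The same helper could be reused with a smaller bound for other algorithms whose schedulable utilization is below 1, which is what "the higher the schedulable utilization, the better" means in practice.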

Excerpt source: the free encyclopedia Wikipedia
Read the full "dynamic priority scheduling" article on Wikipedia




Copyright(C) kotoba.ne.jp 1997-2016. All Rights Reserved.